Remove resource limit #224
Conversation
Merge Failed. This change or one of its cross-repo dependencies was unable to be automatically merged with the current state of its repository. Please rebase the change and upload a new patchset.
Force-pushed from 6bdda49 to dc74452
Looks OK to me as a hotfix.
Build failed (check pipeline). Post https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/628a6287e4a14ffea3aaa99126fedece ❌ openstack-k8s-operators-content-provider FAILURE in 8m 31s
This patch removes the resource limit for test-operator pods, as we are hitting issues in both directions:
- the limit is too low (pods get OOM-killed)
- the limit is too high (the scheduler has nowhere to place the pod)

This is a hotfix until we expose the resource limit values through the test-operator CRs and find the correct default values. Related PRs:
- openstack-k8s-operators#222
- openstack-k8s-operators#205
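For context, a minimal Go sketch of what this hotfix amounts to in an operator built on k8s.io/api. The helper name `buildTestPodSpec` and the container name `test-runner` are hypothetical, not the test-operator's actual code; the point is that leaving `corev1.ResourceRequirements` at its zero value means the kubelet enforces no limits and the scheduler reserves nothing for the pod:

```go
package podspec

import (
	corev1 "k8s.io/api/core/v1"
)

// buildTestPodSpec is a hypothetical helper illustrating the hotfix:
// Resources is left at its zero value, so the pod runs with no CPU or
// memory limit and the scheduler makes no reservation for it.
func buildTestPodSpec(image string) corev1.PodSpec {
	return corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "test-runner",
			Image: image,
			// Before this hotfix, only Limits was set, e.g.:
			//   Resources: corev1.ResourceRequirements{
			//       Limits: corev1.ResourceList{...},
			//   }
			// An empty struct avoids both failure modes described
			// above: OOM kills (limit too low) and unschedulable
			// pods (implied request too high).
			Resources: corev1.ResourceRequirements{},
		}},
	}
}
```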
Force-pushed from dc74452 to dd4c6fe
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: gibizer, kopecmartin, lpiwowar. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/lgtm
/test test-operator-build-deploy
Merged commit 2fef366 into openstack-k8s-operators:main
This patch reintroduces limits for the pods spawned by the test-operator after they were increased and later removed by two earlier PRs [1][2]. The problem with those two patches was that they set only the Resources.Limits field and not the Resources.Requests field. When Resources.Limits is set and Resources.Requests is empty, Requests inherits the value from Limits. As a result, we first hit the OOM-kill issue when Resources.Limits was set too low, and then, once we increased the value, we hit the "Insufficient memory" error (due to the high implied value in the Resources.Requests field).

This patch addresses the issue by:
- setting sane default values for Resources.Limits,
- setting sane default values for Resources.Requests, and
- introducing a new parameter, .Spec.Resources, which can be used to change the defaults (see the sketch below).

[1] openstack-k8s-operators#222
[2] openstack-k8s-operators#224
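A hedged Go sketch of that approach, assuming k8s.io/api and k8s.io/apimachinery. The function names and the default quantities below are placeholders for illustration, not the values this PR actually picked:

```go
package resources

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// defaultResources sets Requests explicitly alongside Limits, so that
// Requests no longer silently inherits the (higher) Limits value; that
// inheritance is what caused the "Insufficient memory" scheduling
// error. All quantities here are placeholder values.
func defaultResources() corev1.ResourceRequirements {
	return corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("500m"),
			corev1.ResourceMemory: resource.MustParse("2Gi"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("2"),
			corev1.ResourceMemory: resource.MustParse("4Gi"),
		},
	}
}

// resourcesFor mirrors the new .Spec.Resources parameter: anything the
// user sets in the CR overrides the defaults; otherwise the defaults
// above apply.
func resourcesFor(specResources corev1.ResourceRequirements) corev1.ResourceRequirements {
	if len(specResources.Requests) > 0 || len(specResources.Limits) > 0 {
		return specResources
	}
	return defaultResources()
}
```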